Okay. I suppose a lot of you are either at home watching the stream or you're watching
the video on demand. Either way, this week we will talk about the exercises. Regarding the exercises, the last two weeks were a little troublesome for us as well, so unfortunately we haven't been able to correct sheet number two yet, and sheet number three was due today. We will probably get back to you during this week with feedback on both sheet number two and sheet number three. Unfortunately it was a little stressful, so we weren't able to get to it earlier. Either way, sheet number two looked like this. So we have a numerical
problem like this. We have a time derivative on the left-hand side, something on the right-hand side, and a starting value f. And now we want to solve this PDE for two different choices of the diffusivity: either the Perona-Malik diffusivity or the total variation diffusivity. So in order to do that, what you usually do is you need some chalk.
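The two diffusivity choices just mentioned can be written down as short functions. This is only a sketch in Python (the course itself uses MATLAB); the contrast parameter `lam` and the regularization `eps` are illustrative values, not taken from the lecture:

```python
import numpy as np

# Perona-Malik diffusivity: close to 1 for small gradients (smooth
# regions diffuse), small for large gradients (edges are preserved).
# The contrast parameter lam is an illustrative choice.
def g_perona_malik(s, lam=0.1):
    return 1.0 / (1.0 + (s / lam) ** 2)

# Total variation diffusivity 1/|grad u|, regularized with a small eps
# so that flat regions (gradient zero) do not cause a division by zero.
def g_total_variation(s, eps=1e-6):
    return 1.0 / np.sqrt(s ** 2 + eps ** 2)
```

Both take the gradient magnitude |∇u| as input and return the factor that multiplies the gradient inside the divergence.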
This is a disadvantage of the blackboard: I don't know if the camera is following me, or if it is actually catching the blackboard. Otherwise you have to ask the people in this room. This one? No, this is the wrong one, I need this one. Yes, very nice. Okay, so we have a problem like this: ∂t u equals the divergence of the diffusivity D̃ of the absolute value of the gradient, times the gradient of u, that is, ∂t u = div(D̃(|∇u|) ∇u), and we have that for t greater than zero. So what we do is basically we discretize the time
step: so what we have is (u(x, t + τ) − u(x, t)) / τ for some time step τ, which should be sufficiently small. And this equals the right-hand side, which is this one here. And now we can reorder everything so that we have an explicit step for the next time step: u(x, t + τ) = u(x, t) + τ times the right-hand side. Now this is quite straightforward in this case, because we have everything at hand and everything in this formula can be discretized and written out. So how does it look in MATLAB? I hope this makes it easier for you to understand
how to do it. So basically, how I start my programs as usual: I just start with loading the image. I try to keep that in the foreground now. So I just load the image. And in this case I compute some gradients, the discretized gradients. I have a function for that; you have to properly define these discretized gradient operators for the image. I can show you an example for a small resolution: for example, if we have a 10 times 10 image, then I want gradient matrices that look like this. This is a function I wrote myself that I use for my day-to-day gradient work, so a discretized version of the gradient. So if I take a look at this gradient matrix, why does this pop up here? It's a sparse matrix, so I have to make it full again. So it looks like this. And this is what you would expect from a gradient in X
direction. So basically this is a matrix you can multiply with a vectorized version of your 10 times 10 image. If you take a 10 times 10 pixel image, you vectorize it, so you put everything just under each other, and then a linear operator like the gradient can be expressed like this: for the first pixel you take the difference of the first and the second pixel, for the second pixel the difference of the second and the third, and so on. And for the last pixel in that row you need to be a little careful; in this case I do nothing at all. And then if you go to the second column, you handle another set of pixels in the same way. So basically, with these matrices I'm able to perform a matrix-vector multiplication which computes the gradient in X and
Y direction. So the Y direction is a little more interesting, because you have to be a little more careful due to the distances: basically you take the difference of two pixels which lie in different columns. So in the vectorized version, the two diagonals of the matrix are spread out, namely by the number of pixels in X direction. So okay, so for my
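The construction just described — forward-difference matrices acting on the vectorized image, with zero rows where "nothing at all" is done at the boundary, and the Y diagonals spread out by the number of pixels in X direction — can be sketched as follows. This is a Python translation with `scipy.sparse` (the lecture's own MATLAB function is not shown); the function names, the discrete divergence as the negative transpose of the gradient, and the Perona-Malik parameter are my assumptions:

```python
import numpy as np
import scipy.sparse as sp

def gradient_matrices(nx, ny):
    """Forward-difference gradient matrices for an image that has been
    vectorized column by column (a sketch of the lecture's helper)."""
    n = nx * ny
    # X direction: difference of neighbouring entries in the vector;
    # the last pixel of each block gets a zero row ("do nothing").
    main = -np.ones(n)
    off = np.ones(n - 1)
    main[nx - 1::nx] = 0.0
    off[nx - 1::nx] = 0.0
    Dx = sp.diags([main, off], [0, 1], shape=(n, n), format="csr")
    # Y direction: the two diagonals are spread out by nx, the number
    # of pixels in X direction; the last column gets zero rows.
    mainy = -np.ones(n)
    mainy[n - nx:] = 0.0
    Dy = sp.diags([mainy, np.ones(n - nx)], [0, nx], shape=(n, n), format="csr")
    return Dx, Dy

def explicit_step(u, Dx, Dy, tau, lam=0.1):
    """One explicit time step u <- u + tau * div(D(|grad u|) grad u).
    The divergence is taken as the negative transpose of the gradient
    (a standard discrete-adjoint choice, not stated in the lecture)."""
    gx, gy = Dx @ u, Dy @ u
    d = 1.0 / (1.0 + (np.hypot(gx, gy) / lam) ** 2)   # Perona-Malik
    return u - tau * (Dx.T @ (d * gx) + Dy.T @ (d * gy))
```

With `Dx, Dy = gradient_matrices(10, 10)`, each matrix is 100 by 100 and `Dx @ u` applies the X gradient to the vectorized image `u` in one matrix-vector multiplication, exactly as described on the blackboard.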
Accessible via: Open access
Duration: 01:37:58 min
Recording date: 2022-07-26
Uploaded on: 2022-07-27 23:49:07
Language: de-DE